Feat: Contract Security and Optimization Improvements #21
sotoJ24 wants to merge 4 commits into trustbridgecr:main from
Conversation
Walkthrough

Adds four new contracts (Access Control Manager, MEV Protection, Oracle Aggregator, Security Guardian), reworks oracle- and pool-related modules toward multi-source price aggregation and governance, expands workspace membership, and introduces batch operations and tests for gas optimization and security/emergency behaviors. Primarily new modules, manifests, and storage/control logic.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Admin
    participant ACM as AccessControlManager
    participant Storage
    Admin->>ACM: initialize(super_admin)
    ACM->>Storage: set initialized + default roles + grant SUPER_ADMIN
    Note over ACM,Storage: Initialization guarded against re-entry
    Admin->>ACM: set_action_roles(contract, action, roles)
    ACM->>Storage: persist action→roles mapping
    Admin->>ACM: grant_role(role, account, granter)
    ACM->>Storage: set user-role entry
    ACM-->>Admin: Result
```

```mermaid
sequenceDiagram
    autonumber
    actor Caller
    participant OA as OracleAggregator
    participant Sources as Oracle Sources
    participant Store as Storage
    Caller->>OA: get_price(asset)
    OA->>Store: load sources, config, heartbeat
    loop active sources
        OA->>Sources: fetch price
        Sources-->>OA: (price, ts)
    end
    OA->>OA: weighted average, deviation check
    OA->>Store: persist aggregated price + ts
    OA-->>Caller: (price, ts)
```

```mermaid
sequenceDiagram
    autonumber
    actor Guardian
    participant SG as SecurityGuardian
    participant Protocol as Protocol Contracts (...)
    participant Store as Storage
    Guardian->>SG: emergency_pause_all(reason)
    SG->>Guardian: authenticate
    loop each contract
        SG->>Protocol: pause()
        Protocol-->>SG: ack/err
    end
    SG->>Store: save pause metadata
    SG-->>Guardian: Result
```

```mermaid
sequenceDiagram
    autonumber
    actor Trader
    participant Pool
    participant Store as Storage
    Trader->>Pool: batch_operations([ops...])
    Pool->>Trader: require_auth
    Pool->>Pool: validate all ops
    loop for each op
        Pool->>Store: read/write as needed
        Pool-->>Trader: append PoolResult
    end
    Pool-->>Trader: Vec<PoolResult>
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (4 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 10
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (13)
- Cargo.toml (1 hunks)
- contracts/access-control-manager/Cargo.toml (1 hunks)
- contracts/access-control-manager/src/lib.rs (1 hunks)
- contracts/mev-protection/Cargo.toml (1 hunks)
- contracts/mev-protection/src/contract.rs (1 hunks)
- contracts/oracle-aggregator/src/contract.rs (1 hunks)
- contracts/oracle/src/contract.rs (1 hunks)
- contracts/pool/src/contract.rs (1 hunks)
- contracts/pool/src/pool/pool.rs (2 hunks)
- contracts/pool/src/pool/test/test_gas_optimizations.rs (1 hunks)
- contracts/security-guardian/Cargo.toml (1 hunks)
- contracts/security-guardian/src/lib.rs (1 hunks)
- contracts/security-guardian/test/security_test.rs (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
contracts/oracle/src/contract.rs (1)
- contracts/oracle/src/events.rs (1): initialized (9-14)

contracts/pool/src/pool/pool.rs (1)
- contracts/pool/src/events.rs (3): supply (185-188), borrow (263-266), repay (278-281)

contracts/pool/src/contract.rs (2)
- contracts/mev-protection/src/contract.rs (1): set_config (51-53)
- contracts/oracle/src/events.rs (1): initialized (9-14)
```rust
/// Check if account has role
pub fn has_role(env: Env, role: Bytes, account: Address) -> bool {
    env.storage().persistent().has(&DataKey::UserRole(account, role))
}

/// Check if account can perform action on contract
pub fn can_perform_action(
    env: Env,
    account: Address,
    contract: Address,
    action: String
) -> bool {
    // Check if account has specific permission
    if env.storage().persistent().has(&DataKey::Permission(account.clone(), contract.clone(), action.clone())) {
        return true;
    }

    // Check role-based permissions
    let required_roles = Self::get_required_roles(&env, &contract, &action);

    for role in required_roles {
        if Self::has_role(env.clone(), role, account.clone()) {
            return true;
        }
    }

    false
}

/// Set action permission requirements
pub fn set_action_roles(
    env: Env,
    admin: Address,
    contract: Address,
    action: String,
    required_roles: Vec<Bytes>
) -> Result<(), AccessControlError> {
    admin.require_auth();
    Self::require_role(&env, &admin, &ADMIN_ROLE)?;

    env.storage().persistent().set(
        &DataKey::ActionRoles(contract.clone(), action.clone()),
        &required_roles
    );

    emit_action_roles_updated(&env, contract, action, required_roles);
    Ok(())
}

/// Emergency role assignment (super admin only)
pub fn emergency_grant_role(
    env: Env,
    super_admin: Address,
    role: Bytes,
    account: Address,
    duration: u64 // Temporary role duration in seconds
) -> Result<(), AccessControlError> {
    super_admin.require_auth();
    Self::require_role(&env, &super_admin, &SUPER_ADMIN_ROLE)?;

    let expiry = env.ledger().timestamp() + duration;
    env.storage().persistent().set(
        &DataKey::TemporaryRole(account.clone(), role.clone()),
        &expiry
    );

    emit_emergency_role_granted(&env, role, account, expiry);
    Ok(())
}
```
There was a problem hiding this comment.
Emergency grants never authorize anyone
emergency_grant_role records a TemporaryRole, but both has_role and can_perform_action only consult UserRole. As written, the temporary grant is ignored, so the guardian path can’t actually exercise the role even before expiry. Please make has_role honor temporary roles (and clear expired ones) so emergency assignments work.
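The intended lookup order — permanent grant first, then an unexpired temporary grant, pruning expired entries on read — can be modeled in plain Rust. This is a sketch with hypothetical in-memory types standing in for Soroban persistent storage, not the contract's actual API:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical in-memory stand-ins for the UserRole and TemporaryRole keys.
struct Roles {
    permanent: HashSet<(String, String)>,      // (account, role)
    temporary: HashMap<(String, String), u64>, // (account, role) -> expiry timestamp
}

impl Roles {
    // Permanent grants win; otherwise an unexpired temporary grant counts,
    // and expired temporary grants are pruned on read.
    fn has_role(&mut self, account: &str, role: &str, now: u64) -> bool {
        let key = (account.to_string(), role.to_string());
        if self.permanent.contains(&key) {
            return true;
        }
        if let Some(&expiry) = self.temporary.get(&key) {
            if now <= expiry {
                return true;
            }
            self.temporary.remove(&key); // prune the expired grant
        }
        false
    }
}

fn main() {
    let mut roles = Roles { permanent: HashSet::new(), temporary: HashMap::new() };
    roles.temporary.insert(("alice".into(), "GUARDIAN".into()), 100);
    assert!(roles.has_role("alice", "GUARDIAN", 50));   // active before expiry
    assert!(!roles.has_role("alice", "GUARDIAN", 101)); // expired, pruned
    assert!(roles.temporary.is_empty());
}
```

Pruning on read keeps storage tidy without needing a separate cleanup entry point.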
```diff
 pub fn has_role(env: Env, role: Bytes, account: Address) -> bool {
-    env.storage().persistent().has(&DataKey::UserRole(account, role))
+    if env.storage().persistent().has(&DataKey::UserRole(account.clone(), role.clone())) {
+        return true;
+    }
+
+    if let Some(expiry) = env.storage().persistent().get(&DataKey::TemporaryRole(account.clone(), role.clone())) {
+        if env.ledger().timestamp() <= expiry {
+            return true;
+        }
+        env.storage().persistent().remove(&DataKey::TemporaryRole(account, role));
+    }
+    false
 }
```

```rust
use soroban_sdk::{Address, Env, Symbol, Vec};
use crate::{ProtectionConfig, TradeRecord};

// Storage keys
#[derive(Clone)]
pub enum DataKey {
    Initialized,
    Admin,
    Config,
    TradeHistory(Address, u32),  // (asset, block) -> Vec<TradeRecord>
    TradesCurrentBlock(Address), // trader -> count
    CurrentBlock,                // last processed block number
    FlaggedAddress(Address),     // address -> (bool, reason)
    PoolLiquidity(Address),      // asset -> liquidity amount
    CurrentPrice(Address),       // asset -> current price
}
```
There was a problem hiding this comment.
DataKey needs #[contracttype] to compile.
Without the #[contracttype] macro (and its import) the enum cannot be used as a Soroban storage key, so every instance()/persistent()/temporary() call here fails to compile. Please annotate the enum accordingly.
Apply this diff:
```diff
-use soroban_sdk::{Address, Env, Symbol, Vec};
+use soroban_sdk::{contracttype, Address, Env, Symbol, Vec};

 #[derive(Clone)]
+#[contracttype]
 pub enum DataKey {
```
🤖 Prompt for AI Agents
In contracts/mev-protection/src/contract.rs around lines 1 to 16, the DataKey
enum is missing the #[contracttype] annotation and its import, so it cannot be
used as a Soroban storage key; add `use soroban_sdk::contracttype;` to the
imports and annotate the enum with `#[contracttype]` immediately above `pub enum
DataKey { ... }` so Soroban can serialize/deserialize it for storage access.
```rust
    TradeHistory(Address, u32),  // (asset, block) -> Vec<TradeRecord>
    TradesCurrentBlock(Address), // trader -> count
    CurrentBlock,                // last processed block number
    FlaggedAddress(Address),     // address -> (bool, reason)
    PoolLiquidity(Address),      // asset -> liquidity amount
    CurrentPrice(Address),       // asset -> current price
}

const DAY_IN_LEDGERS: u32 = 17280;
const INSTANCE_BUMP_AMOUNT: u32 = 7 * DAY_IN_LEDGERS;
const INSTANCE_LIFETIME_THRESHOLD: u32 = INSTANCE_BUMP_AMOUNT - DAY_IN_LEDGERS;

pub fn extend_instance(e: &Env) {
    e.storage()
        .instance()
        .extend_ttl(INSTANCE_LIFETIME_THRESHOLD, INSTANCE_BUMP_AMOUNT);
}

// Initialization
pub fn is_initialized(e: &Env) -> bool {
    e.storage().instance().has(&DataKey::Initialized)
}

pub fn set_initialized(e: &Env) {
    e.storage().instance().set(&DataKey::Initialized, &true);
}

// Admin
pub fn get_admin(e: &Env) -> Address {
    e.storage().instance().get(&DataKey::Admin).unwrap()
}

pub fn set_admin(e: &Env, admin: &Address) {
    e.storage().instance().set(&DataKey::Admin, admin);
}

// Configuration
pub fn get_config(e: &Env) -> ProtectionConfig {
    e.storage().instance().get(&DataKey::Config).unwrap()
}

pub fn set_config(e: &Env, config: &ProtectionConfig) {
    e.storage().instance().set(&DataKey::Config, config);
}

// Trade records
pub fn add_trade_record(e: &Env, trade: &TradeRecord) {
    let block = trade.block;
    let asset = trade.asset.clone();

    let mut trades = get_trades_for_block(e, &asset, block);
    trades.push_back(trade.clone());

    e.storage()
        .temporary()
        .set(&DataKey::TradeHistory(asset, block), &trades);

    // Update current block tracker
    update_current_block(e, block);

    // Cleanup old trades (keep last 100 blocks)
    cleanup_old_trades(e, &asset, block);
}

fn get_trades_for_block(e: &Env, asset: &Address, block: u32) -> Vec<TradeRecord> {
    e.storage()
        .temporary()
        .get(&DataKey::TradeHistory(asset.clone(), block))
        .unwrap_or(Vec::new(e))
}

pub fn get_trades_in_window(
    e: &Env,
    asset: &Address,
    start_block: u32,
    end_block: u32,
) -> Vec<TradeRecord> {
    let mut all_trades = Vec::new(e);

    for block in start_block..=end_block {
        let trades = get_trades_for_block(e, asset, block);
        for i in 0..trades.len() {
            all_trades.push_back(trades.get(i).unwrap());
        }
    }

    all_trades
}

fn update_current_block(e: &Env, block: u32) {
    let current = e.storage()
        .instance()
        .get(&DataKey::CurrentBlock)
        .unwrap_or(0u32);

    if block > current {
        e.storage().instance().set(&DataKey::CurrentBlock, &block);
        // Reset trades count for new block
        reset_all_trades_counts(e);
    }
}

fn cleanup_old_trades(e: &Env, asset: &Address, current_block: u32) {
    if current_block > 100 {
        let old_block = current_block - 100;
        e.storage()
            .temporary()
            .remove(&DataKey::TradeHistory(asset.clone(), old_block));
    }
}

// Trades per block tracking (for rate limiting)
pub fn get_trades_count_current_block(e: &Env, trader: &Address) -> u32 {
    e.storage()
        .temporary()
        .get(&DataKey::TradesCurrentBlock(trader.clone()))
        .unwrap_or(0u32)
}

pub fn increment_trades_count_current_block(e: &Env, trader: &Address) {
    let count = get_trades_count_current_block(e, trader);
    e.storage()
        .temporary()
        .set(&DataKey::TradesCurrentBlock(trader.clone()), &(count + 1));
}

fn reset_all_trades_counts(e: &Env) {
    // In production, you'd want to track all active traders
    // For now, counts will naturally reset when block changes
    // since we check current block in update_current_block
}
```
There was a problem hiding this comment.
Per-block trade counts never reset.
TradesCurrentBlock(Address) is keyed only by trader, so once a trader hits the block limit the counter never clears (the empty reset_all_trades_counts never removes the entry). That permanently rate-limits the address. Key the counter by (trader, block) instead and drop the no-op reset.
Apply this diff:
```diff
-    TradesCurrentBlock(Address), // trader -> count
+    TradesCurrentBlock(Address, u32), // (trader, block) -> count
@@
-        e.storage().instance().set(&DataKey::CurrentBlock, &block);
-        // Reset trades count for new block
-        reset_all_trades_counts(e);
+        e.storage().instance().set(&DataKey::CurrentBlock, &block);
@@
 pub fn get_trades_count_current_block(e: &Env, trader: &Address) -> u32 {
-    e.storage()
-        .temporary()
-        .get(&DataKey::TradesCurrentBlock(trader.clone()))
-        .unwrap_or(0u32)
+    let block = e.ledger().sequence();
+    e.storage()
+        .temporary()
+        .get(&DataKey::TradesCurrentBlock(trader.clone(), block))
+        .unwrap_or(0u32)
 }

 pub fn increment_trades_count_current_block(e: &Env, trader: &Address) {
-    let count = get_trades_count_current_block(e, trader);
-    e.storage()
-        .temporary()
-        .set(&DataKey::TradesCurrentBlock(trader.clone()), &(count + 1));
-}
-
-fn reset_all_trades_counts(e: &Env) {
-    // In production, you'd want to track all active traders
-    // For now, counts will naturally reset when block changes
-    // since we check current block in update_current_block
+    let block = e.ledger().sequence();
+    let key = DataKey::TradesCurrentBlock(trader.clone(), block);
+    let count = e
+        .storage()
+        .temporary()
+        .get(&key)
+        .unwrap_or(0u32);
+    e.storage().temporary().set(&key, &(count + 1));
 }
```
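The (trader, block) keying suggested above can be modeled with plain Rust collections — a counter under a new block number simply starts at zero, so no global reset pass is needed. This is a sketch with hypothetical in-memory types, not the Soroban storage API:

```rust
use std::collections::HashMap;

// In-memory model of per-(trader, block) rate-limit counters.
struct TradeCounter {
    counts: HashMap<(String, u32), u32>, // (trader, block) -> count
}

impl TradeCounter {
    // Missing entries read as zero, mirroring unwrap_or(0u32).
    fn count(&self, trader: &str, block: u32) -> u32 {
        *self.counts.get(&(trader.to_string(), block)).unwrap_or(&0)
    }

    // Incrementing under a fresh block key leaves older blocks untouched.
    fn increment(&mut self, trader: &str, block: u32) {
        *self.counts.entry((trader.to_string(), block)).or_insert(0) += 1;
    }
}

fn main() {
    let mut c = TradeCounter { counts: HashMap::new() };
    c.increment("bob", 7);
    c.increment("bob", 7);
    assert_eq!(c.count("bob", 7), 2);
    assert_eq!(c.count("bob", 8), 0); // next block starts fresh automatically
}
```

In the contract, temporary storage plus the block-scoped key gives the same effect: stale counters simply age out.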
```rust
match Self::get_oracle_price(&env, &source.oracle, &asset) {
    Ok((price, timestamp)) => {
        // Check heartbeat
        if current_time - timestamp <= heartbeat_timeout {
            valid_prices.push_back((price, source.weight, timestamp));
            total_weight += source.weight;
        }
    },
    Err(_) => continue, // Skip failing oracles
```
There was a problem hiding this comment.
Guard against future-timestamp underflow
On Line 87 the code evaluates current_time - timestamp without checking that timestamp <= current_time. A malicious or simply skewed oracle that reports a future timestamp will cause a subtraction underflow and panic, DoS-ing aggregation. Please skip such readings before subtracting:
```diff
-                if current_time - timestamp <= heartbeat_timeout {
+                if timestamp > current_time {
+                    continue;
+                }
+                if current_time - timestamp <= heartbeat_timeout {
                     valid_prices.push_back((price, source.weight, timestamp));
                     total_weight += source.weight;
                 }
```
🤖 Prompt for AI Agents
In contracts/oracle-aggregator/src/contract.rs around lines 84 to 92 the code
subtracts timestamp from current_time without ensuring timestamp <= current_time
which can underflow if an oracle reports a future timestamp; change the check to
first skip any readings with timestamp > current_time (e.g., continue), and only
perform the subtraction and heartbeat_timeout comparison when timestamp <=
current_time so underflow cannot occur.
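The guarded freshness test can be expressed compactly with checked subtraction, which rejects future-stamped readings without any risk of underflow. A minimal plain-Rust sketch (function name is illustrative, not from the contract):

```rust
// A reading is fresh only if it is not from the future and its age is
// within the heartbeat window.
fn is_fresh(current_time: u64, timestamp: u64, heartbeat_timeout: u64) -> bool {
    match current_time.checked_sub(timestamp) {
        Some(age) => age <= heartbeat_timeout,
        None => false, // future timestamp: skip instead of underflowing
    }
}

fn main() {
    assert!(is_fresh(1_000, 950, 100));    // 50s old, within window
    assert!(!is_fresh(1_000, 800, 100));   // stale
    assert!(!is_fresh(1_000, 1_050, 100)); // future-stamped reading rejected
}
```

checked_sub behaves the same whether the build traps on overflow (as Soroban Wasm does) or wraps, so the guard is robust in either mode.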
```rust
let mut weighted_sum = 0i128;
let mut latest_timestamp = 0u64;

for (price, weight, timestamp) in valid_prices.iter() {
    weighted_sum += price * (*weight as i128);
    latest_timestamp = latest_timestamp.max(*timestamp);
}

let aggregated_price = weighted_sum / (total_weight as i128);

// Validate price deviation
Self::validate_price_deviation(&env, &asset, aggregated_price, &valid_prices)?;
```
There was a problem hiding this comment.
Prevent division by zero during aggregation
Lines 109-112 divide by total_weight and later use aggregated_price as a divisor. If every valid source has weight 0 (allowed today) or their weighted prices sum to 0, the contract will panic. Please bail out before dividing when the denominator would be zero:
```diff
 // Calculate weighted average
 let mut weighted_sum = 0i128;
 let mut latest_timestamp = 0u64;

 for (price, weight, timestamp) in valid_prices.iter() {
     weighted_sum += price * (*weight as i128);
     latest_timestamp = latest_timestamp.max(*timestamp);
 }

-let aggregated_price = weighted_sum / (total_weight as i128);
+if total_weight == 0 {
+    return Err(OracleError::InsufficientValidPrices);
+}
+let aggregated_price = weighted_sum / (total_weight as i128);
+if aggregated_price == 0 {
+    return Err(OracleError::InsufficientValidPrices);
+}
```

🤖 Prompt for AI Agents
In contracts/oracle-aggregator/src/contract.rs around lines 101-113, the code
divides by total_weight and later uses aggregated_price as a divisor; add guards
to prevent division-by-zero: check if total_weight == 0 and return an
appropriate contract error (e.g., Err(ContractError::ZeroTotalWeight) or use
ensure!) before computing aggregated_price, and after computing aggregated_price
check if aggregated_price == 0 and return an error (e.g.,
Err(ContractError::ZeroAggregatedPrice)) so no subsequent divisions will panic.
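The guarded aggregation can be modeled in plain Rust, returning None instead of dividing when the denominator (or the resulting price) would be zero. A sketch under simplified types — (price, weight) pairs instead of the contract's triples:

```rust
// Weighted average over (price, weight) pairs; None when total weight is
// zero (no divisor) or the result is zero (would poison later divisions).
fn weighted_price(prices: &[(i128, u32)]) -> Option<i128> {
    let mut weighted_sum: i128 = 0;
    let mut total_weight: i128 = 0;
    for &(price, weight) in prices {
        weighted_sum += price * weight as i128;
        total_weight += weight as i128;
    }
    if total_weight == 0 {
        return None; // all sources zero-weighted
    }
    let aggregated = weighted_sum / total_weight;
    if aggregated == 0 {
        return None; // zero price is unusable downstream
    }
    Some(aggregated)
}

fn main() {
    assert_eq!(weighted_price(&[(100, 1), (200, 3)]), Some(175)); // 700 / 4
    assert_eq!(weighted_price(&[(100, 0)]), None); // zero total weight
    assert_eq!(weighted_price(&[(0, 5)]), None);   // zero aggregate
}
```

The contract equivalent maps the two None cases onto a descriptive error variant, as the diff above does.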
```rust
for (price, _, _) in prices.iter() {
    let deviation = if *price > aggregated_price {
        ((*price - aggregated_price) * 10000) / aggregated_price
    } else {
        ((aggregated_price - *price) * 10000) / aggregated_price
    };

    if deviation > deviation_threshold as i128 {
        emit_price_deviation_alert(env, asset.clone(), *price, aggregated_price, deviation);
        // Don't fail, but log for monitoring
```
There was a problem hiding this comment.
Avoid dividing by zero in deviation checks
validate_price_deviation divides by aggregated_price (Line 159). When the aggregated price is zero (possible after the current averaging logic), this panics. Add an early return before the loop when aggregated_price == 0 so we never divide by zero.
🤖 Prompt for AI Agents
In contracts/oracle-aggregator/src/contract.rs around lines 157 to 166, the
deviation calculation divides by aggregated_price and can panic when
aggregated_price == 0; add an early check before the for loop that returns (or
otherwise exits the function) if aggregated_price == 0 to avoid any
division-by-zero, ensuring no further deviation checks run when the aggregate is
zero.
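The guarded basis-points computation is small enough to isolate into a helper; returning None for a zero reference lets the caller skip the check cleanly. A plain-Rust sketch (the helper name is illustrative):

```rust
// Absolute deviation of `price` from `reference` in basis points;
// None when the reference is zero, so callers never divide by zero.
fn deviation_bps(price: i128, reference: i128) -> Option<i128> {
    if reference == 0 {
        return None;
    }
    let diff = (price - reference).abs();
    Some(diff * 10_000 / reference)
}

fn main() {
    assert_eq!(deviation_bps(11_000, 10_000), Some(1_000)); // 10% = 1000 bps
    assert_eq!(deviation_bps(5_000, 10_000), Some(5_000));  // 50% = 5000 bps
    assert_eq!(deviation_bps(123, 0), None);                // guarded
}
```

Using abs() also collapses the two symmetric branches in the original loop into one expression.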
```rust
if let Ok((last_price, last_timestamp)) = env.storage().persistent()
    .get::<DataKey, (i128, u64)>(&DataKey::AggregatedPrice(asset.clone())) {

    let time_diff = env.ledger().timestamp() - last_timestamp;

    // Check for dramatic price changes in short time
    if time_diff < 300 { // 5 minutes
        let price_change = if new_price > last_price {
            ((new_price - last_price) * 10000) / last_price
        } else {
            ((last_price - new_price) * 10000) / last_price
        };

        // Trigger circuit breaker for >50% price change in 5 minutes
        if price_change > 5000 {
            emit_circuit_breaker_triggered(&env, asset, last_price, new_price, time_diff);
            return Ok(true);
        }
    }
}
```
There was a problem hiding this comment.
Fix persistent storage read compilation error
Env::storage().persistent().get returns an Option, not a Result. The pattern if let Ok(...) (Line 181) will not compile. Replace it with an Option match:
```diff
-if let Ok((last_price, last_timestamp)) = env.storage().persistent()
+if let Some((last_price, last_timestamp)) = env.storage().persistent()
     .get::<DataKey, (i128, u64)>(&DataKey::AggregatedPrice(asset.clone())) {
```
🤖 Prompt for AI Agents
In contracts/oracle-aggregator/src/contract.rs around lines 179 to 199, the
persistent storage read uses `if let Ok(...)` but
`Env::storage().persistent().get` returns an Option, causing a compilation
error; change the pattern to `if let Some((last_price, last_timestamp)) =
env.storage().persistent().get::<DataKey, (i128,
u64)>(&DataKey::AggregatedPrice(asset.clone())) { ... }` so you unwrap the
Option instead of a Result, keeping the same inner logic and variable names
(last_price, last_timestamp) and leaving the rest of the circuit-breaker logic
unchanged.
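The Option-based lookup feeding the circuit-breaker check can be modeled in plain Rust. This sketch uses a HashMap as a hypothetical stand-in for persistent storage and adds a zero-price guard; thresholds (5000 bps in 300 s) match the diff above:

```rust
use std::collections::HashMap;

// Trips when a prior price exists, the reading is within the 5-minute
// window, and the move exceeds 50% (5000 bps) of the last price.
fn breaker_trips(
    last: &HashMap<String, (i128, u64)>, // asset -> (price, timestamp)
    asset: &str,
    new_price: i128,
    now: u64,
) -> bool {
    if let Some(&(last_price, last_ts)) = last.get(asset) {
        let time_diff = now.saturating_sub(last_ts);
        if time_diff < 300 && last_price != 0 {
            let change_bps = (new_price - last_price).abs() * 10_000 / last_price;
            return change_bps > 5_000;
        }
    }
    false
}

fn main() {
    let mut last = HashMap::new();
    last.insert("XLM".to_string(), (1_000i128, 100u64));
    assert!(breaker_trips(&last, "XLM", 1_600, 200));  // +60% in 100s: trips
    assert!(!breaker_trips(&last, "XLM", 1_400, 200)); // +40%: below threshold
    assert!(!breaker_trips(&last, "XLM", 1_600, 500)); // outside 5-minute window
    assert!(!breaker_trips(&last, "BTC", 1_600, 200)); // no prior price
}
```

The missing-entry case falls through to false, which is exactly what the Option pattern buys over the non-compiling Result match.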
```rust
let deviation_bps = calculate_deviation_bps(price, prev_price_data.price);
if deviation_bps > config.max_price_deviation_bps {
    // Price change too large - auto-pause
    Self::auto_pause(&e, Symbol::new(&e, "deviation"));
    panic_with_error!(&e, OracleError::PriceDeviationExceeded);
}
```
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
Auto-pause never persists because the panic rolls back the state
auto_pause writes the circuit-breaker flag, but the subsequent panic_with_error! reverts the entire invocation on Soroban, so the pause flag is rolled back and the contract stays live. The protection never actually activates.
Please set the breaker and exit without panicking (or refactor to return a Result) so the pause survives the deviation check.
```diff
-            Self::auto_pause(&e, Symbol::new(&e, "deviation"));
-            panic_with_error!(&e, OracleError::PriceDeviationExceeded);
+            Self::auto_pause(&e, Symbol::new(&e, "deviation"));
+            return;
```
🤖 Prompt for AI Agents
contracts/oracle/src/contract.rs lines 190-195: auto_pause currently writes the
circuit-breaker flag but the following panic_with_error! rolls back the
transaction so the pause never persists; instead, after calling
Self::auto_pause(&e, Symbol::new(&e, "deviation")), avoid panicking—either
return early from this function (if its signature allows) or refactor the
function to return a Result and return an
Err(OracleError::PriceDeviationExceeded) so the state write persists; ensure you
remove the panic and propagate the error/early exit path so the breaker flag
remains set.
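The `Result`-based refactor described in the prompt can be sketched with plain-Rust stand-ins (`OracleState`, `check_deviation`, and the state layout are hypothetical names, not the contract's real API; on-chain, the key point is that a host panic unwinds the storage write while a value-returning exit does not):

```rust
#[derive(Debug, PartialEq)]
enum OracleError {
    PriceDeviationExceeded,
}

struct OracleState {
    circuit_breaker_tripped: bool,
}

// Deviation in basis points between the new and previous price.
fn calculate_deviation_bps(new_price: i128, prev_price: i128) -> i128 {
    ((new_price - prev_price).abs() * 10_000) / prev_price
}

// Set the breaker and return an error value instead of panicking, so the
// flag write is not unwound before it can persist.
fn check_deviation(
    state: &mut OracleState,
    new_price: i128,
    prev_price: i128,
    max_deviation_bps: i128,
) -> Result<(), OracleError> {
    if calculate_deviation_bps(new_price, prev_price) > max_deviation_bps {
        state.circuit_breaker_tripped = true; // auto-pause flag survives
        return Err(OracleError::PriceDeviationExceeded);
    }
    Ok(())
}

fn main() {
    let mut state = OracleState { circuit_breaker_tripped: false };
    let result = check_deviation(&mut state, 150, 100, 1_000);
    println!("{:?} tripped={}", result, state.circuit_breaker_tripped);
}
```

The plain `return;` variant in the committable suggestion achieves the same persistence without changing the function's signature.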
```rust
for operation in &operations {
    Self::validate_operation(&env, &user, operation)?;
}
```
**Restore the missing `validate_operation` implementation**

`batch_operations` calls `Self::validate_operation` (Line 44), but this method does not exist anywhere in this impl. The code will fail to compile with an unresolved associated function. Please add the implementation (or adjust the call) before merging.
🤖 Prompt for AI Agents
In contracts/pool/src/pool/pool.rs around lines 43–45, batch_operations calls
Self::validate_operation but that associated function is missing; add an
implementation of validate_operation on the same impl block (or adjust the call)
so the code compiles. Implement a method with the same visibility/signature
expected by the call (e.g., fn validate_operation(env: &EnvType, user:
&UserType, operation: &OperationType) -> Result<(), ErrorType>) that performs
the operation checks and returns Err on invalid operations, or replace the call
with the appropriate existing validation function if one already exists.
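A minimal shape for the missing function can be sketched as follows; the operation variants, error type, and zero-amount rule are illustrative stand-ins, not the pool contract's real API:

```rust
#[derive(Debug, PartialEq)]
enum PoolError {
    InvalidAmount,
}

enum Operation {
    Supply { amount: i128 },
    Withdraw { amount: i128 },
}

struct Pool;

impl Pool {
    // Reject non-positive amounts; the real check would also verify
    // reserves, user positions, etc.
    fn validate_operation(operation: &Operation) -> Result<(), PoolError> {
        match operation {
            Operation::Supply { amount } | Operation::Withdraw { amount } => {
                if *amount <= 0 {
                    Err(PoolError::InvalidAmount)
                } else {
                    Ok(())
                }
            }
        }
    }

    // Mirrors the loop in `batch_operations`: validate every entry first so
    // one bad operation fails the whole batch before anything executes.
    fn batch_operations(operations: &[Operation]) -> Result<(), PoolError> {
        for operation in operations {
            Self::validate_operation(operation)?;
        }
        Ok(())
    }
}

fn main() {
    let ops = [Operation::Supply { amount: 100 }, Operation::Withdraw { amount: 50 }];
    println!("{:?}", Pool::batch_operations(&ops));
}
```

Validating up front keeps the batch atomic: no operation executes unless all of them pass.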
```rust
let protocol_contracts = Self::get_protocol_contracts(&env);

for contract in protocol_contracts {
    // Pause each contract
    env.try_invoke_contract(&contract, &symbol_short!("pause"), &());
}

env.storage().instance().set(&DataKey::EmergencyPaused, &true);
env.storage().instance().set(&DataKey::PauseReason, &reason);
env.storage().instance().set(&DataKey::PausedAt, &env.ledger().timestamp());
env.storage().instance().set(&DataKey::PausedBy, &guardian);

emit_emergency_pause_all(&env, guardian, reason);
```
**Handle failed pause calls**

We swallow every `try_invoke_contract` result. If any target rejects the pause, we still mark the system as paused, leaving contracts live while state says otherwise. Please propagate the error (or mark the offending contract) instead of ignoring it so the guardian can react meaningfully.

```diff
-for contract in protocol_contracts {
-    // Pause each contract
-    env.try_invoke_contract(&contract, &symbol_short!("pause"), &());
-}
+for contract in protocol_contracts {
+    if let Err(status) = env.try_invoke_contract(&contract, &symbol_short!("pause"), &()) {
+        return Err(SecurityError::PauseFailed(contract, status));
+    }
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let protocol_contracts = Self::get_protocol_contracts(&env);
for contract in protocol_contracts {
    if let Err(status) = env.try_invoke_contract(&contract, &symbol_short!("pause"), &()) {
        return Err(SecurityError::PauseFailed(contract, status));
    }
}
env.storage().instance().set(&DataKey::EmergencyPaused, &true);
env.storage().instance().set(&DataKey::PauseReason, &reason);
env.storage().instance().set(&DataKey::PausedAt, &env.ledger().timestamp());
env.storage().instance().set(&DataKey::PausedBy, &guardian);
emit_emergency_pause_all(&env, guardian, reason);
```
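The fail-fast behavior of that loop can be sketched in plain Rust; the string contract IDs, the `try_pause` helper, and the single-field `PauseFailed` variant are hypothetical stand-ins for Soroban's `Address`, `try_invoke_contract`, and invocation status types:

```rust
#[derive(Debug, PartialEq)]
enum SecurityError {
    PauseFailed(String),
}

// Stand-in for `env.try_invoke_contract(&contract, "pause", ...)`; one
// contract is hard-wired to reject the pause for demonstration.
fn try_pause(contract: &str) -> Result<(), ()> {
    if contract == "stuck-pool" { Err(()) } else { Ok(()) }
}

// Only mark the system paused after every target actually paused; a single
// failure aborts with the offending contract so the guardian can react.
fn emergency_pause_all(contracts: &[&str]) -> Result<bool, SecurityError> {
    for contract in contracts {
        if try_pause(contract).is_err() {
            return Err(SecurityError::PauseFailed((*contract).to_string()));
        }
    }
    // ... in the real contract: persist EmergencyPaused, PauseReason,
    // PausedAt, and PausedBy only after all pauses succeed ...
    Ok(true)
}

fn main() {
    println!("{:?}", emergency_pause_all(&["pool-a", "pool-b"]));
    println!("{:?}", emergency_pause_all(&["pool-a", "stuck-pool"]));
}
```

Ordering the storage writes after the loop means the `EmergencyPaused` flag can never disagree with the actual pause state of the protocol contracts.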
Pull Request for TrustBridge - Close Issue
❗ Pull Request Information
closes: #19
This PR implements comprehensive security enhancements and advanced protection mechanisms across TrustBridge contracts, addressing critical vulnerabilities in oracle systems, flash loan protection, access control, and MEV attack vectors.
🌀 Summary of Changes
1. Enhanced Oracle Security (Secure TrustBridge Oracle)
2. Access Control Manager Contract
3. MEV Protection Router Contract
4. Project Structure Updates
Security improvements verified:
📂 Files Created/Modified
New Contracts
- `contracts/access-control-manager/` - Complete RBAC system
- `contracts/mev-protection/` - MEV attack prevention
- `contracts/oracle/` - Multi-source aggregation
- `Cargo.toml` - Workspace configuration and dependency management

📂 Related Issue
This pull request will close #19 upon merging.